
    Community Seismic Network

    The article describes the design of the Community Seismic Network, a dense, open seismic network built from low-cost sensors. Inputs come from sensors hosted by community volunteers, connected directly to their personal computers or built into mobile devices. The server is cloud-based for robustness and to dynamically handle the load of impulsive earthquake events. The network's main product is a map of peak acceleration, delivered within seconds of the ground shaking. Lateral variations in the level of shaking will be valuable to first responders, and waveform data from a dense network will allow detailed mapping of the rupture process. Sensors in buildings may also be useful for monitoring the state of health of a structure after major shaking.
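The core per-sensor computation behind a peak-acceleration map can be sketched as follows. This is a minimal illustration, not the network's actual pipeline: the sensor names and readings are hypothetical, and a real system would remove the gravity component and filter noise before taking the peak.

```python
import math

def peak_acceleration(samples):
    """Peak resultant acceleration (m/s^2) over (ax, ay, az) samples."""
    return max(math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples)

# Hypothetical raw readings from two community-hosted sensors
# (gravity still included in the z component).
readings = {
    "sensor_a": [(0.1, 0.0, 9.8), (0.5, 0.2, 10.4), (0.2, 0.1, 9.9)],
    "sensor_b": [(0.0, 0.0, 9.8), (0.1, 0.1, 9.8)],
}

# One number per sensor: the raw material for a shaking map.
shaking_map = {sid: peak_acceleration(s) for sid, s in readings.items()}
```

The cloud server would aggregate these per-sensor peaks, interpolating between stations to produce the map delivered to first responders.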

    The PHENIX Experiment at RHIC

    The physics emphases of the PHENIX collaboration and the design and current status of the PHENIX detector are discussed. The plan of the collaboration for making the most effective use of the available luminosity in the first years of RHIC operation is also presented. Comment: 5 pages, 1 figure. Further details of the PHENIX physics program are available at http://www.rhic.bnl.gov/phenix

    Practical Intrusion-Tolerant Networking

    In networks such as the IP-based networks that run the Internet, nodes trust one another to properly execute routing and forwarding. When a node is compromised (i.e., suffers a Byzantine failure), it can exploit this trust to launch routing attacks that disrupt communication throughout the network. In addition, a compromised node can drop, delay, reorder, replay, or duplicate messages, or inject its own messages to consume resources. While these attacks are networking examples, a compromised node can in fact perform any arbitrary action, so addressing this vulnerability requires an attack-agnostic approach that maintains network functionality even in the presence of compromised nodes. We introduce the first practical solution for intrusion-tolerant networking. Our approach guarantees well-defined semantics to applications, rather than solely routing packets, and allows multiple different semantics to coexist. Specifically, we define two semantics that fit the needs of many applications: one guarantees prioritized timely delivery, and the other guarantees reliable delivery. We introduce a Maximal Topology with Minimal Weights to prevent routing attacks, and provide generic support for source-based routing, limiting the power of the adversary. We discuss two source-based routing techniques: K Node-Disjoint Paths, which is resilient to K-1 compromised nodes, and Constrained Flooding, which provides the optimal guarantee that a message is delivered whenever a correct path exists from source to destination. We also describe the resilient overlay architecture needed to deploy these ideas and make the solution holistic, allowing the resulting system to overcome benign faults as well as malicious and resource-consumption attacks in the underlying network. We present a formal specification of the guarantees and evaluate an implementation deployed on a global cloud spanning 12 data centers from East Asia to North America to Europe.
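The intuition behind K Node-Disjoint Paths can be sketched with a toy routing example. This is a greedy illustration under assumed graph and function names, not the paper's algorithm: greedy path removal does not always find the maximum number of disjoint paths (a max-flow formulation with node splitting would), but it shows why K disjoint paths survive K-1 compromised interior nodes.

```python
from collections import deque

def bfs_path(adj, src, dst, banned):
    """Shortest path from src to dst, skipping banned interior nodes."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev and v not in banned:
                prev[v] = u
                q.append(v)
    return None  # no path avoids the banned nodes

def k_node_disjoint_paths(adj, src, dst, k):
    """Greedily collect up to k paths that share no interior nodes."""
    banned, paths = set(), []
    for _ in range(k):
        p = bfs_path(adj, src, dst, banned)
        if p is None:
            break
        paths.append(p)
        banned.update(p[1:-1])  # interior nodes become unusable
    return paths
```

With two disjoint paths, a single compromised interior node can corrupt at most one of them, so the message still arrives on the other.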

    Toward a Practical Survivable Intrusion Tolerant Replication System

    The increasing number of cyber attacks against critical infrastructures, which typically require large state and long system lifetimes, necessitates the design of systems that work correctly even if parts of them are compromised. We present the first practical survivable intrusion-tolerant replication system, which defends across space and time using compiler-based diversity and proactive recovery, respectively. Our system supports large-state applications and combines the Prime BFT protocol (which provides performance guarantees under attack) with a compiler-based diversification engine. We devise a novel theoretical model that computes how resilient the system is over its lifetime based on the rejuvenation rate and the number of replicas. The model shows that we can achieve 95% confidence in the system over 30 years, even when a state of 1 terabyte is transferred after each rejuvenation.
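The shape of such a lifetime-resilience calculation can be illustrated with a much simpler model than the paper's. The formula below is a hypothetical stand-in: it assumes each replica is independently compromised with probability p within one rejuvenation window, and that the system survives a window if at most f of n replicas are compromised; the paper's actual model is more refined.

```python
from math import comb

def survival_confidence(p, n, f, windows):
    """Probability that at most f of n replicas are compromised in every
    one of `windows` rejuvenation windows (independent compromises).
    Illustrative only; not the paper's model."""
    per_window = sum(
        comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(f + 1)
    )
    return per_window ** windows
```

The model makes the trade-off visible: raising the rejuvenation rate shortens each window (lowering p per window), while adding replicas raises the tolerated threshold f, and both push the compounded confidence up.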

    DELF: Safeguarding deletion correctness in Online Social Networks

    Deletion is a core facet of Online Social Networks (OSNs). For users, deletion is a tool to remove what they have shared and to control their data. For OSNs, robust deletion is both an obligation to their users and a risk when developer mistakes inevitably occur. While developers are effective at identifying high-level deletion requirements in products (e.g., users should be able to delete posted photos), they are less effective at mapping high-level requirements into concrete operations (e.g., deleting all relevant items in data stores). Without framework support, developer mistakes lead to violations of users' privacy, such as retaining data that should be deleted, deleting the wrong data, and exploitable vulnerabilities. We propose DELF, a deletion framework for modern OSNs. In DELF, developers specify deletion annotations on data type definitions, which the framework maps into asynchronous, reliable, and temporarily reversible operations on backing data stores. DELF validates annotations both statically and dynamically, proactively flagging errors and suggesting fixes. We deployed DELF in three distinct OSNs, demonstrating the feasibility of our approach. DELF detected, surfaced, and helped developers correct thousands of omissions and dozens of mistakes, while also enabling timely recovery in tens of incidents where user data was inadvertently deleted.
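The idea of annotation-driven deletion can be sketched in miniature. The schema, policy names, and `delete` function below are invented for illustration and are not DELF's API: each field of a data type is annotated either "deep" (deleting the owner also deletes the referenced object) or "shallow" (only the edge is dropped), and the framework cascades accordingly.

```python
# Hypothetical per-field deletion policies (not DELF's actual annotations).
DEEP, SHALLOW = "deep", "shallow"

SCHEMA = {
    "Post": {"photo": DEEP, "author": SHALLOW},  # deleting a post removes
    "Photo": {},                                 # its photo, not its author
}

def delete(store, type_name, obj_id, deleted=None):
    """Delete obj_id and cascade through fields annotated DEEP."""
    deleted = deleted if deleted is not None else []
    obj = store[type_name].pop(obj_id)
    deleted.append((type_name, obj_id))
    for field, policy in SCHEMA.get(type_name, {}).items():
        if policy == DEEP and field in obj:
            ref_type, ref_id = obj[field]
            if ref_id in store.get(ref_type, {}):
                delete(store, ref_type, ref_id, deleted)
    return deleted
```

Because the cascade is derived from declarative annotations rather than hand-written per call site, a framework can also check the annotations statically (every reference has a policy) and log deletions for temporary reversibility.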